To use or not to use: 3D documentation in fieldwork and in the lab
Excavating an archaeological site involves many decisions and trade-offs. This is due not only to the limited resources on site but also to methodological constraints. Because excavations are destructive, decisions must be made about how much to document and which details the documentation should focus on. A wide range of tools is available for this documentation.
One such tool is 3D documentation. It brings many benefits, such as the ability to move through the excavation in 3D space, the generation of "virtual" profiles, or high-precision DEMs, to name a few. 3D documentation of finds makes it possible to quickly unwrap complex features, produce drawings from different angles, or print replicas for various purposes.
However, compared to conventional methods, some problematic issues arise as well. 3D models must be computed before the excavation can proceed, which demands either time or powerful hardware. The method also produces a vast amount of data because, in contrast to a drawing, most details are recorded. The question is how to balance and evaluate these pros and cons.
With this session, we want to ask: How, and on what basis, do you choose a documentation method? How do you evaluate the cost and time benefits of 3D documentation, taking into account consequential costs such as data storage and curation over long periods of time? Is the time saved in the field spent twice over in the office? How much of the recorded data is actually used in the evaluation?
We aim to bring together archaeologists, technicians, and conservators concerned with the questions stated above, whether working in the field or the lab, or as decision-makers in managing positions. We highly encourage members of commercial archaeological enterprises and of heritage offices to bring in their points of view.
Confidence and Uncertainty Assessment for Distributional Random Forests
The Distributional Random Forest (DRF) is a recently introduced Random Forest
algorithm to estimate multivariate conditional distributions. Due to its
general estimation procedure, it can be employed to estimate a wide range of
targets such as conditional average treatment effects, conditional quantiles,
and conditional correlations. However, only results about the consistency and
convergence rate of the DRF prediction are available so far. We characterize
the asymptotic distribution of DRF and develop a bootstrap approximation of it.
This allows us to derive inferential tools for quantifying standard errors and
the construction of confidence regions that have asymptotic coverage
guarantees. In simulation studies, we empirically validate the developed theory
for inference of low-dimensional targets and for testing distributional
differences between two populations.
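The bootstrap machinery described above can be sketched in a few lines. DRF itself is not implemented here: a standard scikit-learn regression forest serves as a stand-in for the forest estimator, and a simple nonparametric bootstrap (resample, refit, predict) produces a percentile confidence interval for a conditional mean at one query point. All data, names, and settings are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(-1.0, 1.0, size=(n, 2))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.standard_normal(n)

x0 = np.array([[0.5, 0.0]])  # query point for the conditional mean
B = 30                       # bootstrap replicates
preds = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)  # resample the data with replacement
    forest = RandomForestRegressor(n_estimators=100, random_state=b)
    forest.fit(X[idx], y[idx])
    preds[b] = forest.predict(x0)[0]

# percentile bootstrap confidence interval for E[Y | X = x0]
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```

The paper's bootstrap approximates the sampling distribution of the DRF prediction itself; this toy version only conveys the resample-and-refit pattern and the percentile-interval construction.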
Treatment Effect Estimation from Observational Network Data using Augmented Inverse Probability Weighting and Machine Learning
Causal inference methods for treatment effect estimation usually assume
independent experimental units. However, this assumption is often questionable
because experimental units may interact. We develop augmented inverse
probability weighting (AIPW) for estimation and inference of causal treatment
effects on dependent observational data. Our framework covers very general
cases of spillover effects induced by units interacting in networks. We use
plugin machine learning to estimate infinite-dimensional nuisance components
leading to a consistent treatment effect estimator that converges at the
parametric rate and asymptotically follows a Gaussian distribution. We apply
our AIPW method to the Swiss StudentLife Study data to investigate the effect
of hours spent studying on exam performance accounting for the students' social
network.
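As a rough illustration of the AIPW estimator (in a simplified i.i.d. setting, without the network spillover effects the paper handles), the sketch below cross-fits random-forest nuisance estimates and averages the doubly robust score. The simulated data and all names are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal((n, 3))
e_true = 1.0 / (1.0 + np.exp(-X[:, 0]))          # true propensity score
A = rng.binomial(1, e_true)                      # treatment indicator
Y = 2.0 * A + X[:, 0] + rng.standard_normal(n)   # true effect is 2

psi = np.empty(n)  # per-unit AIPW (doubly robust) score
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # nuisance models are fitted on the training fold only (cross-fitting)
    prop = RandomForestClassifier(n_estimators=200, random_state=0)
    prop.fit(X[train], A[train])
    treated, control = train[A[train] == 1], train[A[train] == 0]
    m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[treated], Y[treated])
    m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[control], Y[control])

    e_hat = np.clip(prop.predict_proba(X[test])[:, 1], 0.05, 0.95)
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    a, y = A[test], Y[test]
    psi[test] = (mu1 - mu0
                 + a * (y - mu1) / e_hat
                 - (1 - a) * (y - mu0) / (1 - e_hat))

ate = psi.mean()                       # treatment effect estimate
se = psi.std(ddof=1) / np.sqrt(n)      # plug-in standard error
print(f"ATE estimate: {ate:.3f} +/- {1.96 * se:.3f}")
```

Extending this to dependent units in a network, as the paper does, changes the form of the score and of the variance estimator; the independent-units version above shows only the basic AIPW construction.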
Integrating all Dimensions: 3D-Applications from Excavation to Research to Dissemination
3D-technologies are increasingly shaping the way archaeologists work and think. The fact that 3D recording techniques are becoming part of the standard toolkit in archaeological fieldwork opens up enormous opportunities for research and public outreach. Because archaeological excavations are destructive, conventional documentation techniques have been shaped over decades, if not centuries, to mitigate information loss as much as possible. This includes the development of fitting tools and workflows as well as best practices in archaeological data collection, long-term archiving, research, and dissemination.
As new tools, 3D-technologies need to be integrated into these existing best practices and workflows. To take full advantage of the new possibilities, we consider an integrated approach from the very beginning of a project to be essential. This enables the successful implementation of 3D-technologies at all stages: they are important not only during fieldwork but also later, during research or public outreach, where challenges concerning interoperability or quality may arise and must be addressed. Likewise, the irreversibility of archaeological excavations must be met with reliable long-term archiving of mostly large and complex datasets.
Despite the increasing use of 3D-technologies in everyday archaeological practice, practical experience of which decisions to make, and how they affect later possibilities and limitations, is still developing. Nevertheless, ever more successful projects show how 3D-techniques can be fully integrated into archaeological practice.
This session aims to bring these examples of integrated research projects to a broader archaeological audience. As these powerful documentation techniques have found their way into everyday practice, broad dissemination and discussion of their possibilities and emerging challenges is urgently needed.
A new Approach for Structure from Motion Underwater Pile-Field Documentation
For a pilot study carried out by the University of Bern together with local partners in Summer 2018 at the pile-dwelling site Bay of Bones (Rep. of Macedonia), a new workflow for underwater pile-field documentation was developed.
The site lies in shallow water of 3–5 meters depth, and the most obvious structural remains of the prehistoric settlement are thousands of wooden piles, mainly of oak and juniper, excellently preserved in the lake sediments. The aim of the project was to document and sample a 40 m² surface area of the pile-field and to analyze the samples dendrochronologically.
Dendrochronological sampling requires cutting off the top ends of the piles and thus alters the preserved situation. Prior documentation must therefore ensure that every pile can be located on a map.
This calls for a method that ensures (a) that every pile is distinctly labeled and (b) that the location of each pile is accurately captured. On land this is easily achieved, but underwater working conditions complicate common procedures. For example, when measuring with a folding ruler from a local grid, there is no way to evaluate measuring mistakes or the internal error of the grid afterwards. In addition, for unpracticed divers, measuring by hand underwater is not only time-consuming but also far more error-prone than on land.
The goal was therefore to find a time-saving, accurate, and easy-to-apply way to locate the positions of several hundred piles in shallow water. The best solution for us was a new standardized and reproducible workflow based on Structure from Motion (SfM). The approach to underwater SfM documentation comprises an on-site workflow and post-processing.
The on-site workflow covers all steps from the preparation of the archaeological structures to photographic data acquisition, the calculation of a preliminary 3D model, and its on-site verification. The crucial step was to verify that the data were suitable for modeling before the underwater situation was irreversibly changed by sampling.
Post-processing was carried out in Adobe Photoshop, Agisoft PhotoScan, and QGIS, where the data were optimized in quality and standardized, from digital image processing to the construction of a georeferenced orthomosaic. With these results, we can later visualize patterns in the spatial distribution of the piles, for example by age, size, or wood species. This will yield answers regarding architecture, internal chronology, and in-site settlement dynamics.
With this newly standardized two-step workflow for underwater structure documentation, we are able to assess and compare the quality of each orthomosaic in a reproducible way. The presented method is highly promising for underwater documentation of prehistoric pile-fields, yielding accurate digital plans in an efficient and cost-saving way.
Plug-in machine learning for partially linear mixed-effects models with repeated measurements
Traditionally, spline or kernel approaches in combination with parametric estimation are used to infer the linear coefficient (fixed effects) in a partially linear mixed-effects model for repeated measurements. Using machine learning algorithms allows us to incorporate complex interaction structures, nonsmooth terms, and high-dimensional variables. The linear variables and the response are adjusted nonparametrically for the nonlinear variables, and these adjusted variables satisfy a linear mixed-effects model in which the linear coefficient can be estimated with standard linear mixed-effects methods. We prove that the estimated fixed effects coefficient converges at the parametric rate, is asymptotically Gaussian distributed, and is semiparametrically efficient. Two simulation studies demonstrate that our method outperforms a penalized regression spline approach in terms of coverage. We also illustrate our proposed approach on a longitudinal dataset with HIV-infected individuals. Software code for our method is available in the R-package dmlalg.
Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
Traditionally, spline or kernel approaches in combination with parametric
estimation are used to infer the linear coefficient (fixed effects) in a
partially linear mixed-effects model for repeated measurements. Using machine
learning algorithms allows us to incorporate complex interaction structures and
high-dimensional variables. We employ double machine learning to cope with the
nonparametric part of the partially linear mixed-effects model: the nonlinear
variables are regressed out nonparametrically from both the linear variables
and the response. This adjustment can be performed with any machine learning
algorithm, for instance random forests, which allows to take complex
interaction terms and nonsmooth structures into account. The adjusted variables
satisfy a linear mixed-effects model, where the linear coefficient can be
estimated with standard linear mixed-effects techniques. We prove that the
estimated fixed effects coefficient converges at the parametric rate, is
asymptotically Gaussian distributed, and is semiparametrically efficient. Two
simulation studies demonstrate that our method outperforms a penalized
regression spline approach in terms of coverage. We also illustrate our
proposed approach on a longitudinal dataset with HIV-infected individuals.
Software code for our method is available in the R-package dmlalg.
Regularizing double machine learning in partially linear endogenous models
The linear coefficient in a partially linear model with confounding variables can be estimated using double machine learning (DML). However, this DML estimator has a two-stage least squares (TSLS) interpretation and may produce overly wide confidence intervals. To address this issue, we propose a regularization and selection scheme, regsDML, which leads to narrower confidence intervals. It selects either the TSLS DML estimator or a regularization-only estimator, depending on whose estimated variance is smaller; the regularization-only estimator is tailored to have a low mean squared error. The regsDML estimator is fully data driven, converges at the parametric rate, is asymptotically Gaussian distributed, and is asymptotically equivalent to the TSLS DML estimator, but it exhibits substantially better finite-sample properties. The regsDML estimator builds on the idea of k-class estimators, and we show how DML and k-class estimation can be combined to estimate the linear coefficient in a partially linear endogenous model. Empirical examples demonstrate our methodological and theoretical developments. Software code for our regsDML method is available in the R-package dmlalg.
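The TSLS ingredient mentioned above can be illustrated with a toy, fully parametric example (no nonparametric part, a single instrument); the data-generating process is an assumption for illustration and is unrelated to regsDML itself. It shows why instrumenting removes the confounding bias that plain least squares suffers from.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = rng.standard_normal(n)                       # instrument
h = rng.standard_normal(n)                       # hidden confounder
x = z + h + 0.5 * rng.standard_normal(n)         # endogenous regressor
y = 2.0 * x + h + 0.5 * rng.standard_normal(n)   # true coefficient is 2

# naive OLS is biased upward because h confounds x and y
beta_ols = (x @ y) / (x @ x)

# two-stage least squares: project x onto z, then regress y on the projection
x_hat = z * ((z @ x) / (z @ z))
beta_tsls = (x_hat @ y) / (x_hat @ x_hat)
print(f"OLS: {beta_ols:.3f}, TSLS: {beta_tsls:.3f}")
```

The trade-off the abstract addresses appears even here: the TSLS estimate is unbiased but has a larger variance than OLS whenever the instrument is weak, which is what motivates a regularized alternative such as regsDML.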
Diving into Research. A Talk about the NEENAWA Scientific Diving Course and a resulting new Project at Lake Ohrid.
A central part of the Institutional Partnership (SCOPES) "Network in Eastern
European Neolithic and Wetland Archaeology" (NEENAWA, 2015–2018) was a
European Scientific Diver (ESD) course, realized in summer 2017. Together with
participants from Russia, Ukraine, and the FY Republic of Macedonia, we, four
Bernese students, successfully passed the examination, which was held under
the regulations of the German commission for Scientific Diving (KFT).
The first part of this presentation will show what it means to be trained as
a scientific diver under European law, give an idea of what we did during the
course, and outline the advantages of an education within the framework of the
European Scientific Diving Panel. The course was conducted at the Bay of
Bones, a Bronze Age pile-dwelling settlement on the shore of Lake Ohrid, FY
Republic of Macedonia.
The second part gives an outlook on the new prospects that the ESD course
opened for us. With colleagues we met during the course and the NEENAWA
project, we have started to plan new research activities. The aim is to apply
scientific diving as a method to advance dendrochronology where it has not
been used so far. We chose the Bay of Bones at Lake Ohrid as our research site.
During the ESD course, a small survey was carried out which already raised
several questions we want to explore further. At about 5 m depth lie
well-preserved cultural layers with thousands of piles and artifacts. Until
now, the chronology of this site has been based mainly on ceramic typology.
The goal of the project is to change this by applying combined dendrochronology
and radiocarbon dating. Methodologically, photogrammetry will be used together
with a standard grid on the lake floor and DGPS. This allows systematic, fast
documentation, resulting in a georeferenced map of the sampled piles.